-
Prescreening is a methodology in which forensic examiners select samples similar to the given trace evidence to represent the background population. These background samples are then used to assign a value of evidence via a likelihood ratio or Bayes factor. A key advantage of prescreening is its ability to mitigate the effects of subpopulation structure within the alternative source population by isolating the relevant subpopulation. This paper examines the impact of prescreening prior to assigning the value of evidence. Extensive simulations were conducted with synthetic and real data, including trace element and fingerprint score examples. The findings indicate that prescreening can provide an accurate evidence value in the presence of subpopulation structure, but may also yield more extreme or dampened evidence values within specific subpopulations. The study suggests that prescreening is beneficial for presenting evidence relative to the subpopulation of interest, provided the prescreening method and level are transparently reported alongside the evidence value.
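As a toy illustration of the idea (not the paper's method): a minimal univariate sketch in which prescreening keeps the background measurements nearest the trace, and a simple Gaussian likelihood ratio is computed against either the full population or the prescreened subset. The function names, the within-source standard deviation, and the synthetic two-subpopulation data are all assumptions made for illustration.

```python
import numpy as np
from scipy.stats import norm

def prescreen(trace, background, k=500):
    """Keep the k background measurements closest to the trace measurement."""
    return background[np.argsort(np.abs(background - trace))[:k]]

def likelihood_ratio(trace, control, background, sigma_within=0.1):
    """Univariate Gaussian LR: same-source density of the trace given the
    control measurement, over the trace's density under the background
    population (possibly prescreened)."""
    numerator = norm.pdf(trace, loc=control, scale=sigma_within)
    denominator = norm.pdf(trace, loc=background.mean(),
                           scale=background.std(ddof=1))
    return numerator / denominator

rng = np.random.default_rng(0)
# Synthetic alternative-source population with two subpopulations.
population = np.concatenate([rng.normal(0.0, 0.3, 5000),
                             rng.normal(3.0, 0.3, 5000)])
trace, control = 2.9, 3.0   # hypothetical same-source measurements

print("LR, full population:", likelihood_ratio(trace, control, population))
print("LR, prescreened:    ", likelihood_ratio(trace, control,
                                               prescreen(trace, population)))
```

On this synthetic example the prescreened background concentrates on the relevant subpopulation, so the resulting likelihood ratio is noticeably dampened relative to the full-population value, mirroring the behaviour described in the abstract.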
-
The field of forensic statistics offers a unique hierarchical data structure in which a population is composed of several subpopulations of sources and a sample is collected from each source. This subpopulation structure creates an additional layer of complexity: the data are hierarchical and, at the same time, arise from underlying subpopulations. Finite mixtures are well suited to modeling such heterogeneity; however, previous parameter estimation procedures assume that the data are generated through a simple random sampling process. We propose a semi-supervised mixture modeling approach to model the subpopulation structure, which leverages the fact that each collection of samples is known to come from the same source, albeit from an unknown subpopulation. A simulation study and a real data analysis based on well-known glass datasets and a keystroke dynamics typing dataset show that the proposed approach performs better than approaches previously used in practice.
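A minimal sketch of the general idea, assuming a univariate Gaussian mixture fitted by EM: because all samples from one source are known to share a single (unknown) subpopulation, responsibilities are computed per source group rather than per observation. The grouped_em function, its parameters, and the synthetic data are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm

def grouped_em(x, groups, K=2, n_iter=100, seed=0):
    """EM for a univariate Gaussian mixture in which every observation from
    the same source (group) shares one latent subpopulation label, so
    responsibilities are assigned per group, not per point."""
    rng = np.random.default_rng(seed)
    ids = np.unique(groups)                      # sorted group identifiers
    pi = np.full(K, 1.0 / K)                     # mixing weights
    mu = rng.choice(x, K, replace=False)         # component means
    sd = np.full(K, x.std())                     # component std deviations
    for _ in range(n_iter):
        # E-step: joint log-likelihood of each whole group under each component.
        logr = np.empty((len(ids), K))
        for g, gid in enumerate(ids):
            xg = x[groups == gid]
            logr[g] = np.log(pi) + norm.logpdf(xg[:, None], mu, sd).sum(axis=0)
        logr -= logr.max(axis=1, keepdims=True)  # stabilise before exponentiating
        r = np.exp(logr)
        r /= r.sum(axis=1, keepdims=True)        # group-level responsibilities
        # M-step: each observation inherits its group's responsibilities.
        w = r[np.searchsorted(ids, groups)]
        pi = r.mean(axis=0)
        nk = w.sum(axis=0)
        mu = (w * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((w * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sd

rng = np.random.default_rng(1)
labels = rng.integers(0, 2, 40)                  # latent subpopulation per source
x = np.concatenate([rng.normal(3.0 * z, 0.5, 5) for z in labels])
groups = np.repeat(np.arange(40), 5)             # 40 sources, 5 samples each
pi, mu, sd = grouped_em(x, groups, K=2)
print(pi.round(2), mu.round(2), sd.round(2))
```

The key departure from a standard mixture fit is the E-step: the component log-likelihoods are summed over all observations in a group before normalising, encoding the constraint that a source's samples cannot be split across subpopulations.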
-
Insurance premiums reflect expectations about the future losses of each insured. Given the dearth of cyber security loss data, market premiums could shed light on the true magnitude of cyber losses, despite noise from factors unrelated to losses. To that end, we extract cyber insurance pricing information from the regulatory filings of 26 insurers. We provide empirical observations on how premiums vary by coverage type, amount, policyholder type, and over time. A method using particle swarm optimisation and the expected value premium principle is introduced to iterate through candidate parameterised distributions, with the goal of reducing the error in predicting observed prices. We then aggregate the inferred loss models across 6,828 observed prices from all 26 insurers to derive the County Fair Cyber Loss Distribution. We demonstrate its value in decision support by applying it to a theoretical retail firm with annual revenue of $50M. The results suggest that the expected cyber liability loss is $428K and that the firm faces a 2.3% chance of experiencing a cyber liability loss between $100K and $10M each year. The method and resulting estimates could help organisations better manage cyber risk, regardless of whether they purchase insurance.
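A minimal sketch of the fitting loop described above, under illustrative assumptions: a hand-rolled global-best particle swarm optimiser searches lognormal severity parameters and an annual frequency so that premiums computed via the expected value principle, premium = (1 + loading) × E[annual covered loss], match a handful of hypothetical observed prices. The observed pairs, loading factor, and parameter bounds are made up for the example; the paper fits real filed prices.

```python
import numpy as np
from scipy.stats import norm

def premium(mu, sigma, limit, freq, loading=0.3):
    """Expected value premium principle: (1 + loading) * E[annual covered loss],
    with lognormal(mu, sigma) severity capped at the policy limit and an
    annual claim frequency entering through its mean (closed-form limited
    expected value of the lognormal)."""
    lev = (np.exp(mu + sigma**2 / 2)
           * norm.cdf((np.log(limit) - mu - sigma**2) / sigma)
           + limit * (1 - norm.cdf((np.log(limit) - mu) / sigma)))
    return (1 + loading) * freq * lev

def pso(objective, bounds, n_particles=30, n_iter=200, seed=0):
    """Minimal global-best particle swarm optimiser."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Hypothetical (policy limit, observed annual premium) pairs; the paper fits
# 6,828 real filed prices, which are not reproduced here.
observed = [(1e6, 1500.0), (3e6, 2600.0), (5e6, 3200.0)]

def pricing_error(params):
    mu, sigma, freq = params
    return sum((premium(mu, sigma, lim, freq) - p) ** 2 for lim, p in observed)

bounds = np.array([[8.0, 14.0], [0.5, 3.0], [0.01, 1.0]])  # mu, sigma, freq
best, err = pso(pricing_error, bounds)
print("fitted (mu, sigma, freq):", best.round(3))
```

Once severity and frequency parameters are inferred, annual loss probabilities such as the 2.3% chance of a loss between $100K and $10M follow directly from the fitted distribution.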